To perform outdoor autonomous visual navigation and search, a robot may leverage satellite imagery as a prior map.
This can help inform high-level search and exploration strategies, even when such images lack sufficient resolution for direct visual recognition of targets.
However, approaches that leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search.
To address this challenge, we introduce Search-TTA, a multimodal test-time adaptation framework with a flexible plug-and-play interface compatible with various input modalities (e.g. image, text, sound) and planning methods.
First, we pretrain a satellite image encoder to align with CLIP's visual encoder, so that it outputs probability distributions of target presence that guide visual search.
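The sketch below illustrates one plausible form of this alignment and scoring step, assuming a contrastive objective between satellite-patch embeddings and frozen CLIP embeddings; the module names, architecture, and loss are illustrative assumptions, not the exact design described above.

```python
# Hypothetical sketch: align a satellite-patch encoder with a frozen CLIP image
# encoder via a symmetric InfoNCE loss, then score patches against a query
# embedding to obtain a target-presence probability map over the satellite image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SatPatchEncoder(nn.Module):
    """Toy CNN mapping a satellite patch into CLIP's embedding space (assumed dim)."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=-1)

def alignment_loss(sat_emb, clip_emb, temperature: float = 0.07):
    """Symmetric contrastive loss pulling paired satellite/CLIP embeddings together."""
    logits = sat_emb @ clip_emb.t() / temperature
    labels = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def probability_map(sat_encoder, patches, query_emb, temperature: float = 0.07):
    """Score each satellite patch against a CLIP query embedding (from image,
    text, or another modality) and normalize into a target-presence distribution."""
    with torch.no_grad():
        patch_emb = sat_encoder(patches)               # (N, D), unit-norm
        scores = patch_emb @ query_emb / temperature   # (N,)
        return F.softmax(scores, dim=0)                # probabilities over patches
```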
Second, our framework dynamically refines CLIP's predictions during search using a test-time adaptation mechanism that performs uncertainty-weighted gradient updates online.
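A minimal sketch of such an online update is given below, assuming the agent's observations at visited locations supervise the satellite encoder through a per-observation confidence weight; the binary cross-entropy loss and the weighting scheme are assumptions for illustration, not the exact formulation.

```python
# Hypothetical sketch of one uncertainty-weighted test-time adaptation step:
# after new observations arrive during search, take a gradient step whose
# contribution from each observation is scaled by its confidence weight.
import torch
import torch.nn.functional as F

def tta_step(sat_encoder, optimizer, observed_patches, observed_labels,
             query_emb, confidences, temperature: float = 0.07):
    """One online adaptation step.

    observed_patches: (N, 3, H, W) satellite patches at visited locations
    observed_labels:  (N,) 1.0 if the target was found there, else 0.0
    confidences:      (N,) weights in [0, 1] down-weighting uncertain observations
    """
    sat_encoder.train()
    patch_emb = sat_encoder(observed_patches)          # (N, D), unit-norm
    logits = patch_emb @ query_emb / temperature       # (N,)
    per_obs = F.binary_cross_entropy_with_logits(
        logits, observed_labels, reduction="none")
    loss = (confidences * per_obs).mean()              # uncertainty-weighted loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```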
To train and evaluate Search-TTA, we curate AVS-Bench, a visual search dataset of 380k images drawn from internet-scale ecological data.
We find that Search-TTA improves planner performance by up to 30.0%, particularly in cases where initial CLIP predictions are poor due to out-of-distribution (OOD) scenarios and limited training data.
It also performs comparably to significantly larger VLMs and achieves zero-shot generalization to unseen modalities.